Online Myopic Network Covering

Authors

  • Konstantin Avrachenkov
  • Prithwish Basu
  • Giovanni Neglia
  • Bruno F. Ribeiro
  • Donald F. Towsley
Abstract

Efficient marketing or awareness-raising campaigns seek to recruit n influential individuals – where n is the campaign budget – who are able to cover a large target audience through their social connections. Most of the related literature on maximizing this network cover assumes that the social network topology is known; even then, finding the optimal solution is NP-hard. In practice, however, the network topology is generally unknown and must be discovered on the fly. In this work we consider an unknown topology in which recruited individuals disclose their social connections (a feature known as one-hop lookahead). The goal of this work is to provide an efficient greedy online algorithm that recruits individuals so as to maximize the size of the target audience covered by the campaign. We propose a new greedy online algorithm, Maximum Expected d-Excess Degree (MEED), and provide, to the best of our knowledge, the first detailed theoretical analysis of the cover size of a variety of well-known network sampling algorithms on finite networks. Our proposed algorithm greedily maximizes the expected size of the cover. For a class of random power-law networks we show that MEED simplifies into a straightforward procedure, which we denote MOD (Maximum Observed Degree). We substantiate our analytical results with extensive simulations and show that MOD significantly outperforms all analyzed myopic algorithms. We note that performance may be further improved if the node degree distribution is known or can be estimated online during the campaign.
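The MOD procedure described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the authors' implementation: here "observed degree" of a not-yet-recruited node counts the edges disclosed so far that point to it, and ties are broken by node id for determinism.

```python
from collections import defaultdict

def mod_cover(graph, budget, seed):
    """Sketch of greedy Maximum Observed Degree (MOD) recruiting.

    graph: dict mapping node -> set of neighbours (the true, hidden topology).
    Recruited nodes disclose their neighbour lists (one-hop lookahead); at
    each step we recruit the observed-but-unrecruited node with the largest
    number of disclosed edges pointing to it.
    """
    recruited = {seed}
    covered = {seed} | graph[seed]          # the seed discloses its links
    observed_degree = defaultdict(int)      # disclosed edges seen per node
    for v in graph[seed]:
        observed_degree[v] += 1

    for _ in range(budget - 1):
        boundary = [v for v in covered if v not in recruited]
        if not boundary:
            break
        # greedy step: largest observed degree, smallest id on ties
        nxt = max(boundary, key=lambda v: (observed_degree[v], -v))
        recruited.add(nxt)
        covered |= graph[nxt]               # nxt discloses its links
        for v in graph[nxt]:
            if v not in recruited:
                observed_degree[v] += 1
    return recruited, covered
```

On a small star-like graph, recruiting the high-degree hub after the seed covers the whole audience in two steps, which is the behaviour the greedy rule is designed to produce.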


Related papers

Identification of Recurrent Neural Networks by Bayesian Interrogation Techniques

We introduce novel online Bayesian methods for the identification of a family of noisy recurrent neural networks (RNNs). We present Bayesian active learning techniques for stimulus selection given past experiences. In particular, we consider the unknown parameters as stochastic variables and use A-optimality and D-optimality principles to choose optimal stimuli. We derive myopic cost functions ...


A Reliable Multi-objective p-hub Covering Location Problem Considering of Hubs Capabilities

In facility location problems, reducing total transfer cost and time are common objectives. Designing a network with hub facilities can improve network efficiency. In this study, a new model is presented for the p-hub covering location problem. In the p-hub covering problem, the aim is to locate hubs and allocate customers to the established hubs while the nodes allocated to hubs are in...


Online Covering with Sum of $\ell_q$-Norm Objectives

We consider fractional online covering problems with ℓq-norm objectives. The problem of interest is of the form min{f(x) : Ax ≥ 1, x ≥ 0}, where f(x) = ∑_e c_e ‖x(S_e)‖_{q_e} is a weighted sum of ℓq-norms and A is a non-negative matrix. The rows of A (i.e. covering constraints) arrive online over time. We provide an online O(log d + log ρ)-competitive algorithm, where ρ = max a_ij / min a_ij and d is the m...


Greedy Δ-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost

This paper describes a greedy ∆-approximation algorithm for MONOTONE COVERING, a generalization of many fundamental NP-hard covering problems. The approximation ratio ∆ is the maximum number of variables on which any constraint depends. (For example, for vertex cover, ∆ is 2.) The algorithm unifies, generalizes, and improves many previous algorithms for fundamental covering problems such as ver...
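As a concrete illustration of the Δ = 2 case mentioned above, the classical take-both-endpoints greedy for unweighted vertex cover (a standard textbook algorithm, not the paper's general procedure) can be sketched as:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for unweighted vertex cover: scan the edges
    and, whenever an edge is still uncovered, add both of its endpoints.
    Every chosen pair must intersect any optimal cover in at least one
    vertex, so the result is at most twice the optimum (Delta = 2)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

For example, on the path 1–2–3–4 the greedy returns a cover of size at most 4, while the optimum {2, 3} has size 2, matching the factor-2 guarantee.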


Journal:
  • CoRR

Volume: abs/1212.5035  Issue: –

Pages: –

Publication date: 2012